Jensen-Shannon divergence
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.96)
- Information Technology > Artificial Intelligence > Natural Language (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > Puerto Rico > San Juan > San Juan (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (2 more...)
A Human Behavioral Baseline for Collective Governance in Software Projects
Noori, Mobina; Chakraborti, Mahasweta; Zhang, Amy X.; Frey, Seth
We study how open-source communities describe participation and control through version-controlled governance documents. Using a corpus of 710 projects with paired snapshots, we parse text into actors, rules, actions, and objects, then group them and measure change with entropy for evenness, richness for diversity, and Jensen-Shannon divergence for drift. Projects define more roles and more actions over time, and these are distributed more evenly, while the composition of rules remains stable. These findings indicate that governance grows by expanding and balancing categories of participation without major shifts in prescriptive force. The analysis provides a reproducible baseline for evaluating whether future AI-mediated workflows concentrate or redistribute authority.
- North America > United States > California > Yolo County > Davis (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (3 more...)
- Social Sector (0.46)
- Law (0.46)
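The three measures the abstract names — entropy for evenness, richness for diversity, and Jensen-Shannon divergence for drift — can be sketched as follows. The role counts are hypothetical, invented for illustration; the paper's actual category scheme is not reproduced here.

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (nats) of a count vector; higher means more even."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so bounded in [0, 1]) between two
    probability distributions given as dicts over category labels."""
    keys = set(p) | set(q)
    p = {k: p.get(k, 0.0) for k in keys}
    q = {k: q.get(k, 0.0) for k in keys}
    m = {k: 0.5 * (p[k] + q[k]) for k in keys}
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in keys if a[k] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical role counts from two snapshots of one project's governance files
early = Counter({"maintainer": 10, "contributor": 2})
late = Counter({"maintainer": 8, "contributor": 6, "reviewer": 4})

richness_change = len(late) - len(early)  # one new role category appears
evenness_change = shannon_entropy(late.values()) - shannon_entropy(early.values())

n_early, n_late = sum(early.values()), sum(late.values())
drift = js_divergence({k: v / n_early for k, v in early.items()},
                      {k: v / n_late for k, v in late.items()})
```

In this toy snapshot pair, richness rises by one category, entropy rises because the late counts are more balanced, and the drift is a small positive JSD — the qualitative pattern the abstract reports.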
T3: Test-Time Model Merging in VLMs for Zero-Shot Medical Imaging Analysis
Imam, Raza; Wang, Hu; Mahapatra, Dwarikanath; Yaqub, Mohammad
In medical imaging, vision-language models face a critical duality: pretrained networks offer broad robustness but lack subtle, modality-specific characteristics, while fine-tuned expert models achieve high in-distribution accuracy yet falter under modality shift. Existing model-merging techniques, designed for natural-image benchmarks, are simple and efficient but fail to deliver consistent gains across diverse medical modalities; their static interpolation limits reliability in varied clinical tasks. To address this, we introduce Test-Time Task adaptive merging (T^3), a backpropagation-free framework that computes per-sample interpolation coefficients via the Jensen-Shannon divergence between the two models' output distributions. T^3 dynamically preserves local precision when models agree and defers to generalist robustness under drift. To overcome the inference costs of sample-wise merging, we further propose a batch-wise extension, T^3_B, that computes a merging coefficient across a batch of samples, dramatically reducing the computational bottleneck. Recognizing the lack of a standardized medical-merging benchmark, we present a rigorous cross-evaluation protocol spanning in-domain, base-to-novel, and corruption settings across four modalities. Empirically, T^3 sets a new state of the art in Top-1 accuracy and error reduction, outperforming strong baselines while maintaining efficiency, paving the way for adaptive MVLM deployment in clinical settings. Our code is available at https://github.com/Razaimam45/TCube.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > Switzerland (0.04)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.66)
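A minimal sketch of the per-sample idea in the T^3 abstract: the interpolation coefficient comes from the JSD between the two models' output distributions, so agreement keeps the merge near the fine-tuned expert and drift pushes it toward the pretrained generalist. The linear map from the bounded base-2 JSD to the coefficient is an illustrative assumption, not the paper's exact rule, and weights are plain lists here rather than real model parameters.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def js_divergence(p, q):
    """Jensen-Shannon divergence in base 2, bounded in [0, 1]."""
    mix = [0.5 * (a + b) for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, mix) + 0.5 * kl(q, mix)

def merge_weights(w_expert, w_generalist, logits_expert, logits_generalist):
    """Per-sample merging: low JSD (models agree) keeps the expert's weights;
    high JSD (distribution drift) leans on the generalist. Using the JSD value
    itself as the coefficient is an illustrative choice."""
    lam = js_divergence(softmax(logits_expert), softmax(logits_generalist))
    merged = [(1 - lam) * we + lam * wg for we, wg in zip(w_expert, w_generalist)]
    return merged, lam
```

When the two models emit identical logits the coefficient is zero and the expert weights pass through unchanged; strongly disagreeing logits drive the coefficient toward one.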
Transferable Generative Models Bridge Femtosecond to Nanosecond Time-Step Molecular Dynamics
Diez, Juan Viguera; Schreiner, Mathias; Olsson, Simon
Understanding molecular structure, dynamics, and reactivity requires bridging processes that occur across widely separated time scales. Conventional molecular dynamics simulations provide atomistic resolution, but their femtosecond time steps limit access to the slow conformational changes and relaxation processes that govern chemical function. Here, we introduce a deep generative modeling framework that accelerates sampling of molecular dynamics by four orders of magnitude while retaining physical realism. Applied to small organic molecules and peptides, the approach enables quantitative characterization of equilibrium ensembles and dynamical relaxation processes that were previously only accessible by costly brute-force simulation. Importantly, the method generalizes across chemical composition and system size, extrapolating to peptides larger than those used for training, and captures chemically meaningful transitions on extended time scales. By expanding the accessible range of molecular motions without sacrificing atomistic detail, this approach opens new opportunities for probing conformational landscapes, thermodynamics, and kinetics in systems central to chemistry and biophysics.
- Europe > Sweden > Vaestra Goetaland > Gothenburg (0.04)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
Motivation of the method
Following Reviewer #3, we clarify the motivation behind NC. [...] General changes to the manuscript: following Reviewer #1's suggestion, we included in the Appendix our experi[...] Our intent was to reason about optimal discriminators. Following Reviewers #1 and #3's remarks, we replace the Donsker-Varadhan lower [...] We thank the reviewers for their careful reading. We then use Jensen's inequality with uniform [...] We follow Reviewer #1's suggestion to [...]
Two tales for a geometric Jensen-Shannon divergence
The geometric Jensen-Shannon divergence (G-JSD) gained popularity in machine learning and information sciences thanks to its closed-form expression between Gaussian distributions. In this work, we introduce an alternative definition of the geometric Jensen-Shannon divergence, tailored to positive densities, which does not normalize geometric mixtures. This novel divergence is termed the extended G-JSD, as it applies to the more general case of positive measures. We report explicitly the gap between the extended G-JSD and the G-JSD when considering probability densities, and show how to express the G-JSD and extended G-JSD using the Jeffreys divergence and the Bhattacharyya distance or Bhattacharyya coefficient. The extended G-JSD is proven to be an $f$-divergence, i.e., a separable divergence satisfying information monotonicity and invariance in information geometry. We derive the corresponding closed-form formulas for the two types of G-JSD in the case of multivariate Gaussian distributions often met in applications. We consider Monte Carlo stochastic estimations and approximations of the two types of G-JSD using the projective $\gamma$-divergences. Although the square root of the JSD yields a metric distance, we show that this is no longer the case for the two types of G-JSD. Finally, we explain how these two types of geometric JSD can be interpreted as regularizations of the ordinary JSD.
- Asia > Japan (0.40)
- Asia > Middle East > Jordan (0.04)
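The normalized geometric mixture underlying the G-JSD can be checked numerically for a pair of Gaussians. This sketch assumes the skewed definition $g_\alpha \propto p^{1-\alpha} q^\alpha$ (normalized) with divergence $(1-\alpha)\,\mathrm{KL}(p:g_\alpha) + \alpha\,\mathrm{KL}(q:g_\alpha)$, evaluated on a grid rather than via the closed form the abstract mentions; the grid bounds and densities are illustrative.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density evaluated on a grid."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kl_grid(p, q, dx):
    """Riemann-sum approximation of KL(p:q) on a shared grid."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

def gjsd(p, q, dx, alpha=0.5):
    """Geometric JSD: KL average against the normalized geometric mixture
    g_alpha proportional to p^(1-alpha) * q^alpha (assumed definition)."""
    g = p ** (1 - alpha) * q ** alpha
    g /= np.sum(g) * dx  # normalize the geometric mixture
    return (1 - alpha) * kl_grid(p, g, dx) + alpha * kl_grid(q, g, dx)

# Illustrative densities on a grid wide enough that tail truncation is negligible
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]
p = gaussian_pdf(x, -1.0, 1.0)
q = gaussian_pdf(x, 2.0, 1.5)
```

At $\alpha = 1/2$ the quantity is symmetric in $p$ and $q$ and vanishes when the two densities coincide; unlike the ordinary JSD it is not bounded above by $\log 2$, since the geometric mixture can sit far below both densities between well-separated modes.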